feat: improve AI pipeline robustness, add OpenRouter and Grok support #54
- Rewrite analysis prompts for deeper emotional insight
- Add robust JSON schema validation and retry logic for LLM responses
- Add OpenRouter, Grok, and Anthropic provider integrations
- Add client-side validation for specific AI provider API key formats
- Render new coaching advice and growth area insights on the dashboard
- Handle malformed LLM JSON responses and timeouts gracefully
```js
} catch (err) {
  clearTimeout(timeoutId);
  let safeKey = apiKey ? apiKey.substring(0, 4) + '...' : 'none';
  console.error(`LLM call failed for ${currentProvider} with key ${safeKey}: ${err.message}`);
```
**Check failure** — Code scanning / CodeQL
Clear-text logging of sensitive information (High)

**Copilot Autofix** (AI, 2 days ago)
In general, to fix clear-text logging of sensitive information, remove the sensitive data from log messages or replace it with a non-sensitive placeholder. If you need to distinguish different calls for debugging, use non-secret identifiers (e.g., a generated request ID) instead of secret material.
In this specific case, the problematic behavior is in the catch block of callLLM, where safeKey is derived from the potentially sensitive apiKey and interpolated into a console.error message. The safest change that preserves existing functionality is to stop including any portion of the key in the log. We can remove the safeKey variable and change the log message to not reference the key at all, retaining currentProvider and the error message. This change is confined to the catch block around lines 187–192 in functions/api/analyze.js and requires no new imports or helper methods.
Concretely:
- Delete the line that computes `safeKey`.
- Update the `console.error` call to exclude the key, e.g., `` console.error(`LLM call failed for ${currentProvider}: ${err.message}`); ``
```diff
@@ -186,8 +186,7 @@
     return parsed;
   } catch (err) {
     clearTimeout(timeoutId);
-    let safeKey = apiKey ? apiKey.substring(0, 4) + '...' : 'none';
-    console.error(`LLM call failed for ${currentProvider} with key ${safeKey}: ${err.message}`);
+    console.error(`LLM call failed for ${currentProvider}: ${err.message}`);
     return null;
   }
 };
```
## What
Improved the AI analysis output quality and robustness by implementing stricter JSON schemas, retry logic, and fallback responses. Added support for new AI providers (OpenRouter, Grok, Anthropic, Gemini) with specific client-side API key validation.
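The three mechanisms named here — schema check, retry, fallback — fit together roughly as sketched below. This is illustrative, not the PR's actual implementation: `callModel` is a stand-in for the real provider call, and `validateReport` only checks the two fields this PR's description names (`growth_areas`, `coaching_advice`).

```javascript
// Illustrative sketch of validate-retry-fallback, not code from this PR.
const STRICT_JSON_WARNING =
  '\nCRITICAL: Respond with a single valid JSON object and nothing else.';

// Canned report returned when every generation attempt fails.
const FALLBACK_REPORT = {
  growth_areas: [],
  coaching_advice: 'Analysis is temporarily unavailable. Please try again later.',
};

function validateReport(parsed) {
  // Shape check for the two fields the dashboard renders.
  return (
    parsed !== null &&
    typeof parsed === 'object' &&
    Array.isArray(parsed.growth_areas) &&
    parsed.growth_areas.every((s) => typeof s === 'string') &&
    typeof parsed.coaching_advice === 'string'
  );
}

async function getStructuredReport(callModel, prompt, maxAttempts = 2) {
  let currentPrompt = prompt;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const parsed = JSON.parse(await callModel(currentPrompt));
      if (validateReport(parsed)) return parsed;
    } catch {
      // Malformed JSON: fall through and retry with a stricter prompt.
    }
    currentPrompt = prompt + STRICT_JSON_WARNING;
  }
  return FALLBACK_REPORT;
}
```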
## Why
This makes the analysis output significantly better for users, adding actionable coaching advice and growth areas while ensuring the application handles LLM non-determinism and timeouts gracefully. The addition of new BYOK providers expands options for privacy-conscious users.
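Because BYOK means users paste their own keys, the client can reject an obviously malformed or wrong-provider key before any network call. A minimal sketch, assuming commonly documented key prefixes (`sk-ant-`, `sk-or-`, `xai-`, `AIza`) — the actual patterns in `static/js/app.js` may differ:

```javascript
// Assumed per-provider key shapes; the real patterns live in static/js/app.js.
const KEY_PATTERNS = {
  anthropic: /^sk-ant-[A-Za-z0-9_-]{20,}$/,
  openrouter: /^sk-or-[A-Za-z0-9_-]{20,}$/,
  grok: /^xai-[A-Za-z0-9_-]{20,}$/,
  gemini: /^AIza[A-Za-z0-9_-]{20,}$/,
};

function looksLikeValidKey(provider, key) {
  const pattern = KEY_PATTERNS[provider];
  return Boolean(pattern && pattern.test(key.trim()));
}
```

This only guards against pasting the wrong provider's key; the server still treats every key as opaque and secret.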
## Changes
- `functions/api/analyze.js` — Added robust `callLLM` logic with `AbortController` timeouts, structured JSON schema validation, and provider-specific prompt parsing.
- `dashboard.html` — Updated UI to display newly parsed fields: `growth_areas` and `coaching_advice`.
- `static/js/app.js` — Enhanced client-side validation to correctly identify API key formats for each provider before attempting calls.
- `static/js/dashboard.js` — Mapped the new JSON fields to the UI components.

## Prompt Changes
Created a `PROVIDER_SYSTEM_PROMPTS` object to tailor system instructions to each LLM's preferred format (e.g., Anthropic XML tags). Enforced a strict JSON output schema and added a retry mechanism that appends a critical strict-JSON warning if the first attempt fails.

## New Providers
- Anthropic — authenticated via the `x-api-key` header.
- Gemini — targets the `gemini-1.5-flash` endpoint.
- OpenRouter and Grok — added as additional BYOK options.

## Error Handling
Added comprehensive `try/catch` logic around LLM calls with `setTimeout`-driven `AbortController`s to prevent hanging requests. Safely masked API keys in error logs to maintain privacy standards. Added a fallback JSON report if all generation attempts fail.

PR created automatically by Jules for task 14913244531174148368 started by @rixabhh